Interprocedural optimization
Interprocedural optimization (IPO) is a collection of compiler techniques used in computer programming to improve performance in programs containing many frequently used functions of small or medium length. IPO differs from other compiler optimizations because it analyzes the entire program; most other optimizations look at only a single function, or even a single block of code.
IPO seeks to reduce or eliminate duplicate calculations and inefficient use of memory, and to simplify iterative sequences such as loops. If a call to another routine occurs within a loop, IPO analysis may determine that it is best to inline that routine, as in the sketch below. Additionally, IPO may re-order the routines for better memory layout and locality.
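As a hedged illustration (the function names below are invented, not taken from any particular compiler), the following sketch shows the kind of call inside a loop that an IPO pass might choose to inline:
<syntaxhighlight lang="c">
#include <stddef.h>

/* Hypothetical small routine called from inside a hot loop. */
static double scale(double x, double factor) {
    return x * factor;
}

/* Before IPO: every iteration pays the overhead of a call to scale(). */
double sum_scaled(const double *v, size_t n, double factor) {
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += scale(v[i], factor);   /* candidate for inlining */
    return total;
}

/* After inlining, the loop body effectively becomes
 *
 *     total += v[i] * factor;
 *
 * The call overhead disappears, and further loop optimizations
 * (for example vectorization) become easier to apply. */
</syntaxhighlight>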
IPO may also include typical compiler optimizations applied on a whole-program level, for example dead code elimination, which removes code that is never executed. To accomplish this, the compiler tests for branches that are never taken and removes the code in those branches. IPO also tries to ensure better use of constants. Modern compilers offer IPO as a compile-time option. The actual IPO process may occur at any step between reading the human-readable source code and producing the finished executable binary program.
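A minimal sketch of whole-program constant use and dead code elimination, assuming the analysis can prove that the logging level is never raised anywhere in the program; the names are hypothetical:
<syntaxhighlight lang="c">
#include <stdio.h>

/* Assume whole-program analysis shows this is the only assignment. */
static int log_level = 0;

static void debug_dump(const char *msg) {
    /* With log_level known to be 0 everywhere, this branch is never
     * taken, so whole-program dead code elimination can drop the
     * fprintf call; the remaining empty body makes the call sites
     * candidates for removal as well. */
    if (log_level > 0)
        fprintf(stderr, "debug: %s\n", msg);
}

int main(void) {
    debug_dump("starting");   /* may be removed entirely */
    puts("work done");
    return 0;
}
</syntaxhighlight>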
Whole-program optimization is the compiler optimization of a program using information about all the modules in the program. Normally, optimizations are performed per module ("compiland"); this approach is easier to write and test and less demanding of resources during compilation itself, but it does not allow certainty about the safety of a number of optimizations, such as aggressive inlining, and so cannot perform them even when they would actually turn out to be efficiency gains that do not change the semantics of the emitted object code.
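For example (file names and functions hypothetical), a compiler working one compiland at a time only sees a declaration for code defined elsewhere, so it must emit a real call and cannot safely inline or specialize it:
<syntaxhighlight lang="c">
/* util.h: shared declaration, all that main.c sees on its own. */
int clamp(int x, int lo, int hi);

/* util.c: the definition lives in a different compiland. */
int clamp(int x, int lo, int hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

/* main.c: compiled in isolation, the compiler cannot inline clamp()
 * or prove facts about its result (for example, that it always lies
 * in [0, 255]). Whole-program optimization removes that blind spot
 * by letting the optimizer see both compilands together. */
#include <stdio.h>

int main(void) {
    printf("%d\n", clamp(300, 0, 255));
    return 0;
}
</syntaxhighlight>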
Link-time optimization is a type of program optimization performed by a compiler on a program at link time. Link-time optimization is relevant in programming languages that compile programs on a file-by-file basis and then link those files together (such as C and Fortran), rather than all at once (such as Java's just-in-time (JIT) compilation).
Once all files have been compiled separately into object files, the compiler links (merges) the object files into a single file, the executable. As it does this (or immediately thereafter), a compiler with link-time optimization capabilities can apply various forms of interprocedural optimization to the newly merged file. The process of merging the files may remove the knowledge limitations that applied in the earlier stages of compilation, allowing for deeper analysis, more optimization, and ultimately better program performance.
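As a concrete but hedged sketch of that flow, GCC and Clang expose link-time optimization through the -flto option; the file names below are hypothetical:
<syntaxhighlight lang="c">
/* Typical LTO build flow (GCC or Clang):
 *
 *   cc -O2 -flto -c util.c                  produces util.o
 *   cc -O2 -flto -c main.c                  produces main.o
 *   cc -O2 -flto util.o main.o -o program   final link
 *
 * With -flto the object files carry the compiler's intermediate
 * representation (in addition to, or instead of, ordinary machine
 * code, depending on the compiler), so at the link step the optimizer
 * sees util.c and main.c together and can apply interprocedural
 * optimization, such as inlining clamp() from the sketch above into
 * main(), before the final executable is emitted. */
</syntaxhighlight>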
==Analysis==
The objective of any optimization is to have the program run as swiftly as possible; the problem is that it is not possible for a compiler to correctly analyze a program and determine what it ''will'' do, much less what the programmer ''intended'' for it to do. By contrast, human programmers start at the other end with a purpose, and attempt to produce a program that will achieve it, preferably without expending a lot of thought in the process.
For various reasons, including readability, programs are frequently broken up into a number of procedures, which handle a few general cases. However, the generality of each procedure may result in wasted effort in specific usages. Interprocedural optimization represents an attempt at reducing this waste.
Suppose there is a procedure that evaluates F(x), and the code requests the result of F(6) and then later requests F(6) again. This second evaluation is almost certainly unnecessary: the result could instead have been saved and referred to later, assuming that F is a pure function. This simple optimization is foiled the moment that the implementation of F(x) becomes impure; that is, its execution involves reference to parameters other than the explicit argument 6 that have been changed between the invocations, or side effects such as printing a message to a log, counting the number of evaluations, accumulating the CPU time consumed, preparing internal tables so that subsequent invocations for related parameters will be facilitated, and so forth. Losing these side effects by not evaluating F(6) a second time may or may not be acceptable.
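A minimal sketch of this point, with a hypothetical f standing in for F(x) and an evaluation counter as the side effect:
<syntaxhighlight lang="c">
#include <stdio.h>

static int call_count = 0;   /* side effect: counts evaluations */

/* Pure except for the counter; without it, the two calls below
 * would compute exactly the same value. */
static int f(int x) {
    call_count++;
    return x * x + 1;
}

int main(void) {
    int a = f(6);
    int b = f(6);   /* a compiler that knows f is pure could reuse a */
    printf("%d %d (f evaluated %d times)\n", a, b, call_count);
    /* If the second call is elided, call_count ends up 1 instead of 2:
     * the numeric results are unchanged, but the side effect is lost,
     * which may or may not be acceptable. */
    return 0;
}
</syntaxhighlight>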
More generally, aside from optimization, the second reason to use procedures is to avoid duplicating code that would produce the same results, or almost the same results, each time the procedure is performed. A general approach to optimization would therefore be to reverse this: some or all invocations of a certain procedure are replaced by the respective code, with the parameters appropriately substituted. The compiler will then try to optimize the result.
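For instance (names hypothetical), replacing an invocation with the body of the procedure, parameters substituted, lets ordinary optimization simplify the specialized copy:
<syntaxhighlight lang="c">
/* General procedure: handles any exponent. */
static double power(double base, int exponent) {
    double result = 1.0;
    for (int i = 0; i < exponent; i++)
        result *= base;
    return result;
}

double cube(double x) {
    return power(x, 3);
    /* After substituting the body with exponent = 3, the loop has a
     * known trip count and can be fully unrolled:
     *
     *     return x * x * x;
     *
     * which the compiler can then optimize further in context. */
}
</syntaxhighlight>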

Excerpt source: Wikipedia, the free encyclopedia.
Read the full "Interprocedural optimization" article on Wikipedia.


